180 research outputs found

    A Framework For Abstracting, Designing And Building Tangible Gesture Interactive Systems

    This thesis discusses tangible gesture interaction, a novel paradigm for interacting with computers that blends concepts from the more popular fields of tangible interaction and gesture interaction. By taking advantage of the innate human abilities to manipulate physical objects and to communicate through gestures, tangible gesture interaction is particularly well suited to smart environments, bringing interaction with computers beyond the screen and back into the real world. Since tangible gesture interaction is a relatively new field of research, this thesis presents a conceptual framework that aims to support future work in this field. The Tangible Gesture Interaction Framework provides support on three levels. First, it helps to reflect, from a theoretical point of view, on the different types of tangible gestures that can be designed: physically, through a taxonomy based on three components (move, hold and touch) and additional attributes, and semantically, through a taxonomy of the semantic constructs that can be used to associate meaning with tangible gestures. Second, it helps to conceive new tangible gesture interactive systems and to design new interactions based on gestures with objects, through dedicated guidelines for tangible gesture definition and common practices for different application domains. Third, it helps to build new tangible gesture interactive systems, supporting the choice between four different technological approaches (embedded and embodied, wearable, environmental, or hybrid) and providing general guidance for each approach. As an application of this framework, this thesis also presents seven tangible gesture interactive systems for three different application domains: interacting with the In-Vehicle Infotainment System (IVIS) of a car, emotional and interpersonal communication, and interaction in a smart home.
For the first application domain, four different systems that use gestures on the steering wheel as a means of interacting with the IVIS have been designed, developed and evaluated. For the second application domain, an anthropomorphic lamp able to recognize gestures that humans typically perform for interpersonal communication has been conceived and developed. A second system, based on smart t-shirts, recognizes when two people hug and rewards the gesture with an exchange of digital information. Finally, a smart watch for recognizing gestures performed with objects held in the hand in the context of the smart home has been investigated. The analysis of existing systems found in the literature and of the systems developed during this thesis shows that the framework has good descriptive and evaluative power. The applications developed during this thesis show that the proposed framework also has good generative power.
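The move/hold/touch taxonomy at the heart of the framework can be sketched as a small data structure; the class name, gesture names and attributes below are illustrative assumptions, not the thesis's actual notation:

```python
from dataclasses import dataclass

# Hypothetical sketch of the move/hold/touch taxonomy: a tangible gesture
# is described by which physical components it involves, plus free-form
# attributes (e.g. where on the object the gesture happens).
@dataclass(frozen=True)
class TangibleGesture:
    move: bool = False
    hold: bool = False
    touch: bool = False
    attributes: tuple = ()

    def components(self):
        """Return the active physical components of the gesture."""
        names = []
        if self.move:
            names.append("move")
        if self.hold:
            names.append("hold")
        if self.touch:
            names.append("touch")
        return names

# A swipe on the steering wheel: touching and moving, but not holding.
swipe = TangibleGesture(move=True, touch=True, attributes=("on_wheel",))
# A hug sensed by a smart t-shirt: holding and touching.
hug = TangibleGesture(hold=True, touch=True)
```

Such a structure makes the taxonomy machine-checkable: any designed gesture can be classified by which of the three components it activates.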

    DocMIR: An automatic document-based indexing system for meeting retrieval

    This paper describes the DocMIR system, which automatically captures, analyzes and indexes meetings, conferences, lectures, etc., by taking advantage of the documents projected during the events (e.g. slideshows, budget tables, figures). For instance, the system can automatically apply these procedures to a lecture and index the event according to the presented slides and their contents. For indexing, the system requires neither specific software installed on the presenter's computer nor any conscious intervention by the speaker throughout the presentation. The only material required by the system is the speaker's electronic presentation file. Even if it is not provided, the system temporally segments the presentation and offers a simple storyboard-like browsing interface. The system runs on several capture boxes connected to cameras and microphones that record events synchronously. Once the recording is over, indexing is performed automatically by analyzing the content of the captured video containing the projected documents: the system detects scene changes, identifies the documents, computes their durations and extracts their textual content. Each captured image is identified against a repository containing all original electronic documents, captured audio-visual data and metadata created during post-production. The identification is based on document signatures, which hierarchically combine features from both the layout structure and the color distribution of the document images. Video segments are finally enriched with the textual content of the identified original documents, which facilitates query and retrieval without using OCR. The signature-based indexing method proposed in this article is robust, works with low-resolution images and can be applied to several other applications, including real-time document recognition, multimedia information retrieval and augmented reality systems
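The signature-based identification step can be illustrated with a toy sketch; the grid size, histogram bins and function names below are assumptions for illustration, not DocMIR's actual features:

```python
# Illustrative sketch (not DocMIR's actual algorithm): a document "signature"
# combines a coarse layout grid with an intensity histogram (standing in for
# color distribution), and a captured frame is identified by the nearest
# signature in a repository of original slides.

def signature(pixels, grid=2):
    """pixels: 2D list of grayscale values in [0, 255]."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // grid, w // grid
    layout = []
    for gy in range(grid):
        for gx in range(grid):
            block = [pixels[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            layout.append(sum(block) / len(block))  # mean intensity per region
    hist = [0] * 4  # coarse 4-bin histogram
    for row in pixels:
        for v in row:
            hist[min(v // 64, 3)] += 1
    return layout + hist

def identify(captured, repository):
    """Return the repository key whose signature is closest to the capture."""
    sig = signature(captured)
    def dist(key):
        other = signature(repository[key])
        return sum((a - b) ** 2 for a, b in zip(sig, other))
    return min(repository, key=dist)
```

Because the features are coarse block means and histograms, the match tolerates the low-resolution, noisy frames produced by the capture boxes.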

    Description languages for multimodal interaction: a set of guidelines and its illustration with SMUIML

    This article introduces the problem of modeling multimodal interaction in the form of markup languages. After an analysis of the current state of the art in multimodal interaction description languages, nine guidelines for languages dedicated to multimodal interaction description are introduced, as well as four different roles that such languages should target: communication, configuration, teaching and modeling. The article then presents the SMUIML language, our proposed solution for improving the time-synchronicity aspect while still fulfilling the other guidelines. SMUIML is finally mapped to these guidelines as a way to evaluate their scope and to sketch future work
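The time-synchronicity concern that SMUIML addresses can be illustrated with a minimal fusion rule; this is a generic sketch under our own assumptions, not SMUIML syntax or semantics:

```python
# Minimal sketch of time synchronicity in multimodal fusion: two events from
# different modalities (e.g. speech and pointing) combine into one command
# only if they occur within a given time window.

def fuse(events, window=1.0):
    """events: list of (timestamp, modality) pairs, sorted by timestamp.
    Returns pairs of distinct modalities that fall within `window` seconds."""
    fused = []
    for i, (t1, m1) in enumerate(events):
        for t2, m2 in events[i + 1:]:
            if t2 - t1 > window:
                break  # events are sorted, so later ones are farther away
            if m1 != m2:
                fused.append((m1, m2))
    return fused
```

A description language must let designers express exactly this kind of temporal constraint declaratively, rather than burying it in application code.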

    Design visual thinking tools for mixed initiative systems

    Visual thinking tools are visualization-enabled mixed initiative systems that empower people in solving complex problems by engaging them in the entire resolution process, suggesting appropriate actions with visual cues, and reducing their cognitive load with visual representations of their tasks. At the same time, the visual interaction style provides an alternative to the dialog-based model employed in most mixed-initiative (MI) systems. Visual thinking tools avoid complex analyses of turn taking, and put users in control all the time. We are especially interested in implementing visual "affordances" in such systems and present three examples used in COMIND, a visual MI system that we have developed. We show how humans can more effectively concentrate on synthesizing problems, selecting resolution paths that were unseen by the machine, and reformulating problems if solutions cannot be found or are unsatisfactory. We further discuss our evaluation of the techniques at the end of the paper

    Interactive problem solving via algorithm visualization

    COMIND is a tool for conceptual design of industrial products. It helps designers define and evaluate the initial design space by using search algorithms to generate sets of feasible solutions. Two algorithm visualization techniques, Kaleidoscope and Lattice, and one visualization of n-dimensional data, MAP, are used to externalize the machine's problem solving strategies and the tradeoffs as a result of using these strategies. After a short training period, users are able to discover tactics to explore design space effectively, evaluate new design solutions, and learn important relationships among design criteria, search speed, and solution quality. We thus propose that visualization can serve as a tool for interactive intelligence, i.e., human-machine collaboration for solving complex problems
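The tradeoff structure that Kaleidoscope and Lattice externalize can be sketched as a Pareto filter over candidate designs; the two criteria and the function below are illustrative assumptions, not COMIND's actual model:

```python
# Illustrative sketch: after the search generates feasible candidates, the
# interesting tradeoffs are the Pareto-optimal ones, i.e. designs that no
# other candidate beats on both criteria at once.

def pareto_front(candidates):
    """candidates: list of (cost, quality); lower cost and higher quality win.
    Returns the candidates not dominated by any other candidate."""
    front = []
    for c, q in candidates:
        dominated = any(c2 <= c and q2 >= q and (c2, q2) != (c, q)
                        for c2, q2 in candidates)
        if not dominated:
            front.append((c, q))
    return front
```

Visualizing this front, rather than a single "best" answer, is what lets users weigh design criteria against search speed and solution quality themselves.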

    Vasco: Outil interactif pour l'exploration précoce des données

    We describe Vasco, a data visualization tool for inexperienced users. Vasco is designed to allow and promote early exploration of data, targeting users with no experience in visualization design or data analysis. Vasco structures the interface, using panels and cards, so that users can easily select data and create visualizations. Vasco automatically generates suitable charts according to the selected variables and the morphology of the data. In addition, Vasco lets users control and organize their design process through multiple workspaces. Finally, a controlled study comparing the usability of Vasco with that of Voyager 2 shows that the constant presentation of the data taxonomy and the iterative nature of the exploration help users understand the data visualization process, feel more confident and perform better
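Automatic chart generation of this kind can be caricatured as a rule table over variable types; the rules and function below are assumptions for illustration, not Vasco's published logic:

```python
# Hypothetical sketch of recommending a chart from the types of the selected
# variables (the "morphology" of the data). Real recommenders such as
# Voyager use far richer rules; this just shows the shape of the idea.

def recommend_chart(var_types):
    """var_types: list of 'quantitative', 'categorical' or 'temporal'."""
    kinds = sorted(var_types)
    if kinds == ["quantitative"]:
        return "histogram"
    if kinds == ["categorical"]:
        return "bar chart"
    if kinds == ["quantitative", "quantitative"]:
        return "scatter plot"
    if kinds == ["categorical", "quantitative"]:
        return "grouped bar chart"
    if kinds == ["quantitative", "temporal"]:
        return "line chart"
    return "table"  # fall back when no rule matches
```

Encoding the rules over variable types, rather than chart names, is what lets a novice pick data first and get a sensible visualization for free.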

    Affective Interaction in Smart Environments

    We present a concept in which the smart environments of the future will be able to provide ubiquitous affective communication: all surfaces will become interactive and the furniture will display emotions. In particular, we present a first prototype that allows people to share their emotional states in a natural way. Input is given through facial expressions and output is displayed in a context-aware, multimodal way. Two novel output modalities are presented: a robotic painting that applies the concept of affective communication to informative art, and an RGB lamp that represents emotions while remaining in the user's peripheral attention. An observation study was conducted during an interactive event, and we report our preliminary findings in this paper
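The RGB-lamp output can be sketched as a mapping from a recognized emotion to a color; the palette and the intensity scaling below are our own assumptions, not the prototype's actual design:

```python
# Hypothetical emotion-to-color mapping for an ambient RGB lamp. The palette
# is illustrative; intensity scales the color so weak expressions stay in
# the user's peripheral attention.

PALETTE = {
    "joy": (255, 200, 0),       # warm yellow
    "sadness": (0, 80, 200),    # cool blue
    "anger": (220, 30, 30),     # red
    "neutral": (255, 255, 255), # white
}

def lamp_color(emotion, intensity=1.0):
    """Return the RGB triple for an emotion, scaled by intensity in [0, 1]."""
    r, g, b = PALETTE.get(emotion, PALETTE["neutral"])
    k = max(0.0, min(1.0, intensity))
    return (round(r * k), round(g * k), round(b * k))
```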

    Finding Information in Multimedia Records of Meetings

    This paper overviews the work carried out within two large consortia on improving the access to records of human meetings using multimodal interfaces. The design of meeting browsers has emerged as an important goal, with both theoretical interest and practical applications. Meeting browsers are assistance tools that help humans navigate through multimedia records of meetings (audio, video, documents, and metadata), in order to obtain a general idea about what happened in a meeting or to find specific pieces of information, for discovery or verification. To explain the importance that meeting browsers have gained in time, the paper summarizes findings of user studies, discusses features of meeting browser prototypes, and outlines the main evaluation protocol proposed. Reference scores are provided for future benchmarking. These achievements in meeting browsing constitute an iterative software process, from user studies to prototypes and then to products